
The algorithm will perform well when the reference and input images are linear transformations of each other. It will perform poorly if the reference and input images are non-linear warps of each other.
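To make "linear transformation" concrete: an affine map (a matrix applied to coordinates plus a translation) sends straight edges to straight edges, which is exactly the class of mismatch an affine registration can correct. A minimal numpy sketch, illustrative only (the rotation, scaling, and translation below are arbitrary choices, not part of the pipeline):

```python
import numpy as np

# Corners of a square (one face of a cuboid), stored as column vectors
corners = np.array([[0.0, 1.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0, 1.0]])

# A hypothetical affine map: rotate, scale anisotropically, then translate
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) @ np.diag([2.0, 0.5])
b = np.array([[3.0], [1.0]])

warped = A @ corners + b

# Affine maps carry straight edges to straight edges: the midpoint of an
# edge lands exactly on the midpoint of the warped edge
mid_in = (corners[:, :1] + corners[:, 1:2]) / 2
mid_out = A @ mid_in + b
assert np.allclose(mid_out, (warped[:, :1] + warped[:, 1:2]) / 2)
```

A non-linear warp offers no such guarantee, which is why the algorithm cannot fully recover it.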
We will evaluate the performance of the algorithm on two real datasets: BNU1 and BNU2. The BNU1 dataset consists of 46 subjects, each with two scan sessions, giving us a total of 92 brain samples. Each sample is a 4D scan: a 91 x 109 x 91 volume at 200 timesteps. The BNU2 dataset is structured the same way, except it has 61 subjects with 2 sessions each, giving us 122 samples in total. A sample brain from each dataset is shown below.


I expect the algorithm to perform well when registering the BNU1 dataset to the MNI152 template because the overall shapes of the input and reference images are approximately linear transformations of each other.
Below is the code used to generate simulated data and run the algorithm on said data. We will perform two types of simulations: good and bad. The good simulations will consist of inputs and references that are linear transformations of each other, so the algorithm should perform well. The bad simulations will consist of inputs and references that are non-linear transformations of each other, so the algorithm should perform poorly.
For the good simulations, the input will be a cuboid and the reference will be a linearly transformed cuboid. For the bad simulations, the input will be a cuboid and the reference will be an irregular blob formed from overlapping spheres.
We will qualitatively assess the results of the simulations by overlaying the reference and the registered input image, and checking how well they match up. We will quantitatively assess the results of the simulations by taking the MSE between the reference and the registered input image.
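The quantitative metric is straightforward; the `imgMSE` calls in the code below are assumed to compute something equivalent to this plain-numpy sketch:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equally shaped arrays."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

# A perfectly registered image has MSE 0 against the reference;
# a fully mismatched binary image has MSE 1
print(mse(np.ones((4, 4)), np.ones((4, 4))))   # 0.0
print(mse(np.zeros((4, 4)), np.ones((4, 4))))  # 1.0
```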
Code and results below:
%matplotlib inline
from ndreg import *
import math
import numpy as np
import matplotlib.pyplot as plt
import SimpleITK as itk

n = 200
a = np.random.randint(low=50, high=150, size=1, dtype=np.int16)[0]
b = np.random.randint(low=50, high=150, size=1, dtype=np.int16)[0]
c = np.random.randint(low=50, high=150, size=1, dtype=np.int16)[0]
d = np.random.randint(low=50, high=150, size=1, dtype=np.int16)[0]
r = np.random.randint(low=20, high=50, size=1, dtype=np.int16)[0]
y, x = np.ogrid[-a:n-a, -b:n-b]
z, w = np.ogrid[-c:n-c, -d:n-d]
mask = x*x + y*y <= r*r
mask2 = w*w + z*z <= r*r

# Checkerboard texture so the deformation inside the circle is visible
checkers = np.ones((n, n))
for i in range(n):
    for j in range(n):
        if (i % (n//5) == 0) and (j % (n//5) == 0):
            checkers[i:i+(n//10), j:j+(n//10)] = 0.5
        if (i % (n//5) == (n//10)) and (j % (n//5) == (n//10)):
            checkers[i:i+(n//10), j:j+(n//10)] = 0.5

# Input: checkered circle at (a, b); reference: checkered circle at (c, d)
inData = np.zeros((n, n))
inData[mask] = checkers[mask]
inImg = itk.GetImageFromArray(inData)
refData = np.zeros((n, n))
refData[mask2] = checkers[mask2]
refImg = itk.GetImageFromArray(refData)

(field, invField) = imgMetamorphosis(inImg, refImg, iterations=2, verbose=True)
defInImg = imgApplyField(inImg, field, size=refImg.GetSize())
imgShow(inImg)
imgShow(refImg)
imgShow(defInImg)
plt.imshow(itk.GetArrayFromImage(refImg - defInImg))
%matplotlib inline
from ndreg import *
import math
import numpy as np
import SimpleITK as itk
preErrors = []
postErrors = []
for iters in range(10):
    n = 50
    a = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    b = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    c = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    d = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    e = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    f = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    r = np.random.randint(low=5, high=15, size=1, dtype=np.int16)[0]
    r2 = np.random.randint(low=5, high=15, size=1, dtype=np.int16)[0]
    # Input: a random cube; reference: another random cube, i.e. an
    # affine (translation + scaling) transform of the input
    inData = np.zeros((n, n, n))
    inData[(a-r):(a+r), (b-r):(b+r), (c-r):(c+r)] = 1
    inImg = itk.GetImageFromArray(inData)
    refData = np.zeros((n, n, n))
    refData[(d-r2):(d+r2), (e-r2):(e+r2), (f-r2):(f+r2)] = 1
    refImg = itk.GetImageFromArray(refData)
    affine = imgAffineComposite(inImg, refImg, iterations=100)
    defInImg = imgApplyAffine(inImg, affine, size=refImg.GetSize())
    print("sim " + str(iters))
    imgShow(inImg, numSlices=1)
    imgShow(refImg, numSlices=1)
    imgShow(defInImg, numSlices=1)
    imgShow(refImg - defInImg, numSlices=1)
    preError = imgMSE(inImg, refImg)
    postError = imgMSE(defInImg, refImg)
    print(preError)
    print(postError)
    print("")
    print("")
    preErrors.append(preError)
    postErrors.append(postError)
%matplotlib inline
from ndreg import *
import math
import numpy as np
import SimpleITK as itk
preErrors2 = []
postErrors2 = []
for iters in range(10):
    n = 50
    a = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    b = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    c = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    d = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    e = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    f = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    g = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    h = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    i = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    j = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    k = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    l = np.random.randint(low=15, high=35, size=1, dtype=np.int16)[0]
    r = np.random.randint(low=5, high=15, size=1, dtype=np.int16)[0]
    z, y, x = np.ogrid[-d:n-d, -e:n-e, -f:n-f]
    mask = x*x + y*y + z*z <= r*r
    z, y, x = np.ogrid[-g:n-g, -h:n-h, -i:n-i]
    mask2 = x*x + y*y + z*z <= r*r
    z, y, x = np.ogrid[-j:n-j, -k:n-k, -l:n-l]
    mask3 = x*x + y*y + z*z <= r*r
    # Input: a random cube
    inData = np.zeros((n, n, n))
    inData[(a-r):(a+r), (b-r):(b+r), (c-r):(c+r)] = 1
    inImg = itk.GetImageFromArray(inData)
    # Reference: union of three random spheres, which is not an affine
    # transform of the cube
    refData = np.zeros((n, n, n))
    refData[mask] = 1
    refData[mask2] = 1
    refData[mask3] = 1
    refImg = itk.GetImageFromArray(refData)
    affine = imgAffineComposite(inImg, refImg, iterations=100)
    defInImg = imgApplyAffine(inImg, affine, size=refImg.GetSize())
    print("sim " + str(iters))
    imgShow(inImg, numSlices=1)
    imgShow(refImg, numSlices=1)
    imgShow(defInImg, numSlices=1)
    imgShow(refImg - defInImg, numSlices=1)
    preError = imgMSE(inImg, refImg)
    postError = imgMSE(defInImg, refImg)
    print(preError)
    print(postError)
    print("")
    print("")
    preErrors2.append(preError)
    postErrors2.append(postError)
We will quantitatively assess the population results by averaging the pre- and post-registration MSE across each type of simulation. We will qualitatively assess the population results by plotting the post-registration MSE from each simulation.
print("Good simulation pre-registration MSE: " + str(np.mean(preErrors)))
print("Good simulation post-registration MSE: " + str(np.mean(postErrors)))
print("Bad simulation pre-registration MSE: " + str(np.mean(preErrors2)))
print("Bad simulation post-registration MSE: " + str(np.mean(postErrors2)))
x = range(10)
plt.scatter(x, postErrors, c='b', s=40, label='Good simulations')
plt.scatter(x, postErrors2, c='r', s=40, label='Bad simulations')
plt.legend(loc='upper left')
plt.xlabel("Simulation number")
plt.ylabel("MSE")
plt.title('Post-Registration MSE')
We can see from the above results that the algorithm performed as expected on the simulations. The MSE metric serves as a sanity check: registration decreased the MSE in every simulation. The plot also shows that, overall, the good simulations achieved somewhat lower post-registration MSE than the bad simulations. Now, we move on to the real data.
import nibabel as nb

# The MNI152 template is the fixed reference for every registration
reference = nb.load("MNI152_T1_2mm_brain.nii.gz").get_data()
refImg = itk.GetImageFromArray(reference)

for subject in range(864, 910):
    for session in range(1, 3):
        print(str(subject) + " " + str(session))
        filename = "BNU_1_0025" + str(subject) + "_" + str(session) + "_rest.nii.gz"
        brain = nb.load("../resampledBNU1/" + filename).get_data()
        # Estimate the affine from the first timestep only
        brainslice = brain[:, :, :, 0]
        inImg = itk.GetImageFromArray(brainslice)
        affine = imgAffineComposite(inImg, refImg, iterations=100)
        defInImg = imgApplyAffine(inImg, affine, size=refImg.GetSize())
        newbrain = np.zeros(brain.shape)
        newbrain[:, :, :, 0] = itk.GetArrayFromImage(defInImg)
        # Apply the same affine to every remaining timestep
        for step in range(1, brain.shape[3]):
            brainslice = brain[:, :, :, step]
            imgslice = itk.GetImageFromArray(brainslice)
            regslice = imgApplyAffine(imgslice, affine, size=refImg.GetSize())
            newbrain[:, :, :, step] = itk.GetArrayFromImage(regslice)
        identity = np.diag([1, 1, 1, 1])
        newbrainimg = nb.Nifti1Image(newbrain, affine=identity)
        nb.save(newbrainimg, "affinedBNU1/" + filename)
        print(affine)
        imgShow(inImg, numSlices=1)
        imgShow(refImg, numSlices=1)
        imgShow(defInImg, numSlices=1)
        imgShow(refImg - defInImg, numSlices=1)
import nibabel as nb

# The MNI152 template is the fixed reference for every registration
reference = nb.load("MNI152_T1_2mm_brain.nii.gz").get_data()
refImg = itk.GetImageFromArray(reference)

for subject in range(921, 982):
    for session in range(1, 3):
        print(str(subject) + " " + str(session))
        filename = "BNU_2_0025" + str(subject) + "_" + str(session) + "_rest.nii.gz"
        brain = nb.load("../resampledBNU2/" + filename).get_data()
        # Estimate the affine from the first timestep only
        brainslice = brain[:, :, :, 0]
        inImg = itk.GetImageFromArray(brainslice)
        affine = imgAffineComposite(inImg, refImg, iterations=100)
        defInImg = imgApplyAffine(inImg, affine, size=refImg.GetSize())
        newbrain = np.zeros(brain.shape)
        newbrain[:, :, :, 0] = itk.GetArrayFromImage(defInImg)
        # Apply the same affine to every remaining timestep
        for step in range(1, brain.shape[3]):
            brainslice = brain[:, :, :, step]
            imgslice = itk.GetImageFromArray(brainslice)
            regslice = imgApplyAffine(imgslice, affine, size=refImg.GetSize())
            newbrain[:, :, :, step] = itk.GetArrayFromImage(regslice)
        identity = np.diag([1, 1, 1, 1])
        newbrainimg = nb.Nifti1Image(newbrain, affine=identity)
        nb.save(newbrainimg, "affinedBNU2/" + filename)
        print(affine)
        imgShow(inImg, numSlices=1)
        imgShow(refImg, numSlices=1)
        imgShow(defInImg, numSlices=1)
        imgShow(refImg - defInImg, numSlices=1)




We can see that all the results have been consistent with our expectations, which suggests our understanding of the algorithm is sound. I expect the algorithm to perform similarly well on other real datasets: not only is it widely used in the fMRI analysis community, but it also met our qualitative and quantitative expectations here. We could perhaps improve performance further by diving deeper into some of the more specific options available in the tool, once we have a better understanding of our datasets themselves.